In my Xcode 15.x, it's under: File > New > Package...
I'm afraid eye tracking information isn't currently available at all, regardless of the immersion level of your app. When creating an app in the mixed space, Apple handles eye tracking for you (but it's abstracted away), but when creating a fully immersive app you get nothing. So you'll need to come up with an alternative mechanism of input, for example using a game controller to manipulate an onscreen cursor.
It's a shame, and is the biggest limitation of the system right now IMO.
What's your use case?
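If you do go down the game controller route, a rough sketch of reading the left thumbstick with the GameController framework might look like this (moveCursor is a hypothetical stand-in for however you move your own cursor entity):

import GameController

// Hypothetical helper standing in for however you move your own cursor entity/overlay.
func moveCursor(dx: Float, dy: Float) { /* ... */ }

// Attach a thumbstick handler to whichever controller is currently connected.
func attachCursorControl() {
    guard let gamepad = GCController.current?.extendedGamepad else { return }

    // Called whenever the left stick moves; use the axes to drive the cursor.
    gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
        moveCursor(dx: x, dy: y)
    }
}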
Without more information it's difficult to say, but it sounds like you might want to use a MTLBlitCommandEncoder to copy the data from some buffer to the drawable texture.
It should achieve a similar effect. But if it doesn't, please provide more information.
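As a rough sketch of what that blit might look like (the buffer layout, and names like sourceBuffer and drawableTexture, are assumptions about your setup):

import Metal

// Copies the contents of a source buffer into a destination texture using a blit pass.
// Assumes the buffer holds tightly packed rows matching the texture's dimensions.
func copyBuffer(_ sourceBuffer: MTLBuffer,
                to drawableTexture: MTLTexture,
                bytesPerRow: Int,
                using commandQueue: MTLCommandQueue) {
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let blitEncoder = commandBuffer.makeBlitCommandEncoder() else { return }

    // Copy a texture-sized region from the buffer into the texture.
    blitEncoder.copy(from: sourceBuffer,
                     sourceOffset: 0,
                     sourceBytesPerRow: bytesPerRow,
                     sourceBytesPerImage: bytesPerRow * drawableTexture.height,
                     sourceSize: MTLSize(width: drawableTexture.width,
                                         height: drawableTexture.height,
                                         depth: 1),
                     to: drawableTexture,
                     destinationSlice: 0,
                     destinationLevel: 0,
                     destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blitEncoder.endEncoding()
    commandBuffer.commit()
}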
Of course, if you do want to go down the document-based app route, it's possible, but it has some weirdness about it (mostly due to limitations with the FileDocument API). Because of the awkward way the document-based API works, I'd still suggest using the file picker approach, even though the document-based route takes care of sandbox access for you.
You're on the right lines with your code, but because the FileDocument API doesn't give you the URL, and there's no mechanism for loading a USDZ as an Entity from Data, you have to write the USDZ contents to a temporary directory that you can access from your app sandbox, and then load it using the standard mechanism.
This is demonstrated here: https://vimeo.com/922479510.
import RealityKit
import SwiftUI
import UniformTypeIdentifiers

struct USDDocumentLoaderDocument: FileDocument {
    // The entity will hold the contents of the USDZ file.
    //
    var entity: Entity

    static var readableContentTypes: [UTType] { [.usdz] }

    init(configuration: ReadConfiguration) throws {
        // Load the contents of the USDZ file.
        //
        guard let data = configuration.file.regularFileContents else {
            throw CocoaError(.fileReadCorruptFile)
        }

        // Write the data to a temporary location.
        //
        let temporaryFile = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString, conformingTo: .usdz)
        try data.write(to: temporaryFile)

        // Load the USDZ from the temporary file. This blocks, which isn't ideal.
        //
        self.entity = try Entity.load(contentsOf: temporaryFile)
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        throw CocoaError(.featureUnsupported)
    }
}
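If you did want to try it, a minimal sketch of hosting that document type in a read-only DocumentGroup scene might look like this (I haven't confirmed how well the system document browser behaves on visionOS, so treat it as a starting point):

import RealityKit
import SwiftUI

@main
struct USDDocumentLoaderApp: App {
    var body: some SwiftUI.Scene {
        // A read-only document scene; the system presents its own document browser.
        DocumentGroup(viewing: USDDocumentLoaderDocument.self) { file in
            // Display the entity that the document loaded.
            RealityView { content in
                content.add(file.document.entity)
            }
        }
    }
}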
For example, the code below shows how to load a USDZ file from a file picker, as demonstrated here: https://vimeo.com/922469524
When you tap the button, it presents the system file picker and allows the user to load a USDZ file from their Downloads or locally saved files.
import RealityKit
import SwiftUI
import UniformTypeIdentifiers

@main
struct Application: App {
    @State var entity: Entity? = nil
    @State var showFilePicker: Bool = false

    var body: some SwiftUI.Scene {
        WindowGroup {
            VStack {
                // A view for displaying the loaded asset.
                //
                RealityView(
                    make: { content in
                        // Add a placeholder entity to parent the entity to.
                        //
                        let placeholderEntity = Entity()
                        placeholderEntity.name = "$__placeholder"
                        if let loadedEntity = self.entity {
                            placeholderEntity.addChild(loadedEntity)
                        }
                        content.add(placeholderEntity)
                    },
                    update: { content in
                        guard let placeholderEntity = content.entities.first(where: {
                            $0.name == "$__placeholder"
                        }) else {
                            preconditionFailure("Unable to find placeholder entity")
                        }

                        // If there is a loaded entity, remove the old child,
                        // and add the new one.
                        //
                        if let loadedEntity = self.entity {
                            placeholderEntity.children.removeAll()
                            placeholderEntity.addChild(loadedEntity)
                        }
                    }
                )

                // A button that displays a file picker for loading a USDZ.
                //
                Button(
                    action: {
                        showFilePicker.toggle()
                    },
                    label: {
                        Text("Load USDZ")
                    }
                )
                .padding()
            }
            .fileImporter(isPresented: $showFilePicker, allowedContentTypes: [.usdz]) { result in
                // Get the URL of the USDZ picked by the user.
                //
                guard let url = try? result.get() else {
                    print("Unable to get URL")
                    return
                }

                Task {
                    // As the app is sandboxed, permission needs to be
                    // requested to access the file, as it's outside of
                    // the sandbox.
                    //
                    if url.startAccessingSecurityScopedResource() {
                        defer {
                            url.stopAccessingSecurityScopedResource()
                        }

                        // Load the USDZ asynchronously.
                        //
                        self.entity = try await Entity(contentsOf: url)
                    }
                }
            }
        }
    }
}
I'm curious what you want the flow of the app to be like in an ideal world? FileDocument isn't the only way to get the USDZ loaded in, so I wouldn't rely on it unless you have to.
@KTRosenberg Eye tracking in mixed and immersive space is supported in the sense that looking at a thing and tapping it can trigger a gesture, but any information about the interaction or the "gaze" is entirely abstracted away. For example on macOS/iPadOS in SwiftUI, you can use onHover(perform:) to trigger a closure when the user's mouse enters a View; however, there is no way to do this on visionOS, as Apple does not inform you of the gesture until the user taps to confirm the hit.
Apple are of course aware of where you are looking as they use the information to do automatic hover state on views in SwiftUI, or to update the state of any Entity that has a HoverEffectComponent, but if you either want to do drawing outside of RealityKit (in which case the HoverEffectComponent will not be available, and gaze information in general is unavailable), or you want to do custom hover drawing in either SwiftUI or RealityKit, you are stuck.
It's understandable why this approach has been taken if Apple are protective of gaze information out of privacy concerns. If you had the ability to know when a view is hovered, there are many ways you could abuse that to determine where the user is looking. Apple clearly feels this would be abused, likely by advertisers and bad actors.
Of course... I can't help but feel that if Apple had a way to inject custom drawing into RealityKit (not just pre-canned Entities, or products of Reality Composer), this would be lessened. For example, it would be nice to be able to stream geometry updates to a ModelEntity without having to rebuild the MeshResource - if you could have some way of indicating that (for example) the shape has changed, it would request a new position buffer only. But even with that, RealityKit still has a long way to go to fill the gaps that force you to use Metal, including proper geometry hit testing against the surface of the mesh, and true custom shaders that allow me to describe a custom BRDF or custom effects, instead of the not-that-helpful CustomMaterial. But I guess this is a v1. :shrug:
I haven't seen the issues you're seeing, but here is how we use it in our app and it works fine (sometimes it struggles to play smoothly on heavy assets).
// Add the opacity component, otherwise the animation doesn't do anything.
//
someEntity.components.set(OpacityComponent(opacity: 1.0))
// Create an animation to fade the opacity to zero.
//
let animationDefinition = FromToByAnimation(from: Float(1.0), to: Float(0.0), duration: 1.0, bindTarget: .opacity)
if let animationResource = try? AnimationResource.generate(with: animationDefinition) {
    // Play the animation.
    //
    someEntity.playAnimation(animationResource)
}
Not sure what's going wrong on your side, but hopefully this code snippet helps.
You can't have executable code at the top level, as it's not clear when that code is executed.
Remove those lines, or move them into a function that can be called.
func printRange() {
    let range01INT1 = 100000 ... 199999
    print("Range:", range01INT1, "Number of numbers", range01INT1.count)
}
I'd definitely get started with using the tools in Xcode, and review the link to the docs I shared: https://developer.apple.com/documentation/xcode/source-control-management. For the most part, if you're remaining local, you only have to worry about committing the changes.
When you do want to start working with GitHub, I believe their free account supports private repositories, so you don't have to share your code with the world.
It's worth bearing in mind that your file structure for development is different to the file structure for the compiled application. some.data may not be at the same level as ContentView.swift, or even available to your application by default.
The best way to achieve what you need is to set up Xcode to copy the some.data file into your bundle as a resource, which you can then access at runtime.
Add the file to the application bundle
In Xcode, select your Project in the project navigator.
Find the application target for your project in the Targets list. This will usually have a little app icon.
Navigate to "Build Phases".
Expand the "Copy Bundle Resources" phase.
Press the "+" button and in the dialog that opens, find your some.data file, and click "Add".
Add the file to the test bundle
The steps are very similar for adding it to the testing target.
In Xcode, select your Project in the project navigator.
Find the test target for your project in the Targets list. This will usually have either a little icon with four squares, or a little icon with two squares and a slider, depending on whether you are doing UI tests or unit tests.
Navigate to "Build Phases".
Expand the "Copy Bundle Resources" phase.
Press the "+" button and in the dialog that opens, find your some.data file, and click "Add".
Access the file in code
Once your file is available in the bundle, you can access it wherever you need it, whether that's a test, some UI code, or somewhere else.
You can get the URL to your file as follows:
Bundle.main.url(forResource: "some", withExtension: "data", subdirectory: nil)
Once you have the URL, you access it however you want. For example, in my test app, some.data is a basic text file, so I use it to populate the value of a Text view.
struct ContentView: View {
    private var contentsOfFile: String? {
        guard let url = Bundle.main.url(forResource: "some", withExtension: "data", subdirectory: nil) else {
            return nil
        }
        return try? String(contentsOf: url)
    }

    var body: some View {
        Text(contentsOfFile ?? "Unable to read file")
            .padding()
    }
}
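One caveat worth flagging: a resource added to the test target's "Copy Bundle Resources" ends up in the test bundle rather than in Bundle.main, so in a test you'd resolve it relative to the test class instead. A rough sketch (SomeDataTests is just a placeholder name):

import XCTest

final class SomeDataTests: XCTestCase {
    func testSomeDataIsReadable() throws {
        // The resource was copied into the test bundle, so resolve it
        // relative to this test class rather than Bundle.main.
        let bundle = Bundle(for: SomeDataTests.self)
        let url = try XCTUnwrap(bundle.url(forResource: "some", withExtension: "data"))
        let contents = try String(contentsOf: url)
        XCTAssertFalse(contents.isEmpty)
    }
}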
I think really the only option that gives you what you need is Git or some form of version control, so you could keep track of changes, diff changes, revert changes...etc.
I'd be curious why you're averse to git? Is it because it feels like a lot of ceremony for saving your files? Or that you want to keep everything local?
If you're not collaborating with others, you don't necessarily need a server or host like GitHub to use git - you can use it entirely locally, and your changes never leave your file system. Bear in mind though that you lose the benefits of storing your code externally, or of easily moving your work to another machine, but pushing to a remote server can be added later if your needs change.
To set up Git locally, you have three options:
When you create the project, choose "Create git repository on my Mac"
This is obviously the easiest way to get started, and then you can use Xcode to manage changes. It's not much help if you have an existing project, but it's a good option when starting a project from scratch.
Add a Git repository for an existing project from Xcode
This is useful if you have an existing project structure and you want to set up a git repository around it.
In Xcode, navigate to Integrate > New Git Repository...
Select the project in the dialog, and click "Create"
Xcode will set up that local repo and automatically commit your project structure, allowing you to start making changes and diffing immediately.
Initialise an empty git repository from the command line
This is also useful when you have an existing project. It doesn't really do anything differently, but allows you to use the terminal if that's your preference.
In the terminal application, navigate to a folder where you want to store your project, making a new folder if necessary.
Run the command git init.
Copy your project files into that directory if necessary.
If you've copied your files into that new directory, git won't automatically commit them. You must stage and commit them manually. On the command line, enter git add . followed by git commit --message "The initial commit of my project".
You now have basically the same setup as performed by Xcode, and can use Xcode to manage the changes, diff the changes...etc.
Saving changes
You can either manage your project from Xcode using the Source Control features, or you can continue to commit changes on the command line.
Whichever approach you take (and I'd suggest Xcode), your repo remains local and fully functional. You can work offline, and perform most of the operations you need, without the code ever leaving your machine.
I know you wanted to avoid Git, but I think a local git repo is really the only option that achieves what you need.
Sadly, there's no way to do this at the moment other than the approach you've taken which I agree doesn't smell right.
As you've found, the only approach that works right now is using withObservationTracking(_:onChange:). Any Observable property that you access in the apply closure will trigger onChange when it changes. But it will only trigger once, so you must call withObservationTracking(_:onChange:) again to re-register the observation.
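For reference, a rough sketch of that re-register-on-change pattern (Model and value are just placeholder names):

import Observation

@Observable
final class Model {
    var value: Int = 0
}

// Re-registers the observation each time it fires, since
// withObservationTracking(_:onChange:) only notifies once per call.
func observe(_ model: Model) {
    withObservationTracking {
        // Any Observable property accessed here is tracked.
        _ = model.value
    } onChange: {
        print("value changed")
        // The registration is one-shot, so register again to keep observing.
        observe(model)
    }
}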
The Metal API and metal-cpp are supported, however with some significant caveats:
Metal can only be used for writing fully immersive applications. In short, applications that take over all drawing and do not exist in the mixed space or overlaid on the real world. This can be useful for games or full experiences.
MTKView is not supported for immersive applications. Instead you must use the CompositorServices API and its CompositorLayer type to perform custom drawing with your own render loop (there's a rough sketch of the overall shape below).
There are additional steps needed that are unique to visionOS, such as foveated rendering, head tracking...etc. Some of these are non-trivial, which can make adoption difficult.
visionOS is less forgiving about maintaining frame rates. If you don't update, draw, and present your frames quickly enough, you will drop frames and provide a poor experience for users.
I'd recommend starting with this video from WWDC '23, as it provides a good overview of what needs to be done: https://developer.apple.com/wwdc23/10089.
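To give a feel for the overall shape, here's a very rough sketch (renderLoop stands in for your own Metal render loop, and details like the layer configuration, foveation, head tracking and the Info.plist scene setup are omitted):

import CompositorServices
import SwiftUI

// Hypothetical render loop: pull frames from the LayerRenderer, query timing,
// encode your Metal work, and present, as covered in the WWDC session above.
func renderLoop(_ layerRenderer: LayerRenderer) { /* ... */ }

@main
struct FullyImmersiveApp: App {
    var body: some SwiftUI.Scene {
        ImmersiveSpace(id: "ImmersiveSpace") {
            CompositorLayer { layerRenderer in
                // Compositor Services hands you a LayerRenderer; drive your own
                // render loop on a dedicated thread from here.
                let renderThread = Thread {
                    renderLoop(layerRenderer)
                }
                renderThread.name = "Render Thread"
                renderThread.start()
            }
        }
    }
}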